conditional interpretation
Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Conditional Interpretations
Xu, Xinyue; Qin, Yi; Mi, Lu; Wang, Hao; Li, Xiaomeng
Existing methods, such as concept bottleneck models (CBMs), have been successful in providing concept-based interpretations for black-box deep learning models. They typically work by predicting concepts given the input and then predicting the final class label given the predicted concepts. However, (1) they often fail to capture the high-order, nonlinear interaction between concepts, e.g., correcting a predicted concept (e.g., "yellow breast") does not help correct highly correlated concepts (e.g., "yellow belly"), leading to suboptimal final accuracy; (2) they cannot naturally quantify the complex conditional dependencies between different concepts and class labels (e.g., for an image with the class label "Kentucky Warbler" and a concept "black bill", what is the probability that the model correctly predicts another concept "black crown"), therefore failing to provide deeper insight into how a black-box model works. In response to these limitations, we propose Energy-based Concept Bottleneck Models (ECBMs). Our ECBMs use a set of neural networks to define the joint energy of candidate (input, concept, class) tuples. With such a unified interface, prediction, concept correction, and conditional dependency quantification are then represented as conditional probabilities, which are generated by composing different energy functions. Our ECBMs address both limitations of existing CBMs, providing higher accuracy and richer concept interpretations. Empirical results show that our approach outperforms the state-of-the-art on real-world datasets.
- North America > United States > Kentucky (0.24)
- Asia > China > Hong Kong (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.50)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.50)
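The ECBM abstract above describes a unified interface in which neural networks assign a joint energy to candidate (input, concept, class) tuples, and prediction, concept intervention, and conditional-dependency queries are all read off as conditional probabilities built by composing those energies. The following is a minimal PyTorch-style sketch of that idea only; the module names (JointEnergyCBM, concept_energy, class_energy), the additive energy decomposition, and the enumeration-based prediction step are illustrative assumptions, not the authors' actual architecture or training objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointEnergyCBM(nn.Module):
    """Sketch of a joint energy E(x, c, y) over (input, concept, class) tuples.

    Assumes an additive decomposition E(x, c, y) = E_xc(x, c) + E_cy(c, y);
    the actual ECBM factorisation and training objective may differ.
    """

    def __init__(self, feat_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        # Energy of a candidate concept vector given image features.
        self.concept_energy = nn.Linear(128 + n_concepts, 1)
        # Energy of a candidate class (one-hot) given a concept vector.
        self.class_energy = nn.Linear(n_concepts + n_classes, 1)
        self.n_classes = n_classes

    def energy(self, x, c, y_onehot):
        h = self.backbone(x)
        e_xc = self.concept_energy(torch.cat([h, c], dim=-1))
        e_cy = self.class_energy(torch.cat([c, y_onehot], dim=-1))
        return (e_xc + e_cy).squeeze(-1)  # lower energy = better (x, c, y) fit

    def predict_class(self, x, c):
        """p(y | x, c): softmax of negative energies over all candidate classes."""
        energies = []
        for k in range(self.n_classes):
            y = F.one_hot(torch.full((x.shape[0],), k, dtype=torch.long),
                          self.n_classes).float()
            energies.append(self.energy(x, c, y))
        return F.softmax(-torch.stack(energies, dim=-1), dim=-1)


# Usage: class probabilities for 4 images with 10 concepts and 5 classes.
model = JointEnergyCBM(feat_dim=512, n_concepts=10, n_classes=5)
probs = model.predict_class(torch.randn(4, 512), torch.rand(4, 10))
print(probs.shape)  # torch.Size([4, 5])
```

Under this sketch, a concept intervention would amount to clamping the corrected entries of c before re-normalising p(y | x, c) over the candidate classes; the paper's conditional-dependency queries are analogous compositions of the same energy functions.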
A General Framework for Modelling Conditional Reasoning -- Preliminary Report
Casini, Giovanni; Straccia, Umberto
Conditionals are generally considered the backbone of human (and AI) reasoning: the "if-then" connection between two propositions is the stepping stone of arguments, and much of the research effort in formal logic has focused on this kind of connection. A conditional connection satisfies different properties according to the kind of arguments it is used for. The classical material implication is appropriate for modelling the "if-then" connection as it is used in Mathematics, but the equivalence between the material implication A → B and ¬A ∨ B is not appropriate for many other contexts.
- Africa > South Africa > Western Cape > Cape Town (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Italy > Tuscany > Pisa Province > Pisa (0.04)
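As a standard illustration (not drawn from the paper itself) of why the equivalence mentioned in the abstract above is problematic outside mathematics, the classical identity and its "vacuous truth" consequence can be written as follows:

```latex
% Classical equivalence of material implication:
\[
  A \rightarrow B \;\equiv\; \neg A \lor B .
\]
% Hence a false antecedent makes the conditional true regardless of B:
\[
  \neg A \models A \rightarrow B \quad \text{for any } B,
\]
% which is harmless in Mathematics but clashes with everyday "if-then"
% statements: "if the moon is made of cheese, then it will rain tomorrow"
% comes out classically true.
```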